23 research outputs found

    2D Sparse Signal Recovery via 2D Orthogonal Matching Pursuit

    Full text link
    Recovery algorithms play a key role in compressive sampling (CS). Most current CS recovery algorithms were originally designed for one-dimensional (1D) signals, while many practical signals are two-dimensional (2D). With 2D separable sampling, the 2D signal recovery problem can be converted into a 1D one, so that ordinary 1D recovery algorithms, e.g. orthogonal matching pursuit (OMP), can be applied directly. However, even with 2D separable sampling, the memory usage and complexity at the decoder remain high. This paper develops a novel recovery algorithm called 2D-OMP, an extension of 1D-OMP in which each atom in the dictionary is a matrix. At each iteration, the decoder projects the sample matrix onto the 2D atoms to select the best-matched atom, and then updates the weights of all previously selected atoms via least squares. We show that 2D-OMP is in fact equivalent to 1D-OMP, yet it reduces recovery complexity and memory usage significantly. Moreover, the same methodology extends readily to higher-dimensional OMP (e.g. 3D-OMP).
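
    The abstract describes the 2D-OMP iteration only in words. The NumPy snippet below is a minimal sketch of that iteration under stated assumptions (a list `atoms` of m-by-n dictionary matrices, a sample matrix `Y` of the same size, and a sparsity level `k`); it is an illustration, not the authors' implementation.

    ```python
    import numpy as np

    def omp_2d(Y, atoms, k):
        """Toy 2D-OMP: atoms are matrices, matching uses the Frobenius inner product."""
        residual = Y.copy()
        selected = []                                   # indices of chosen atoms
        for _ in range(k):
            # Project the residual onto every 2D atom and pick the best match.
            scores = [abs(np.sum(residual * A)) for A in atoms]
            selected.append(int(np.argmax(scores)))
            # Re-estimate the weights of all selected atoms by least squares.
            B = np.stack([atoms[i].ravel() for i in selected], axis=1)
            w, *_ = np.linalg.lstsq(B, Y.ravel(), rcond=None)
            residual = Y - (B @ w).reshape(Y.shape)
        return selected, w
    ```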

    GPU acceleration of predictive partitioned vector quantization for ultraspectral sounder data compression

    Get PDF
    For the large-volume ultraspectral sounder data, compression is desirable to save storage space and transmission time. To retrieve the geophysical parameters without losing precision, the compression of ultraspectral sounder data has to be lossless. Recently there has been a surge in the use of graphics processing units (GPUs) to speed up scientific computations. By identifying the time-dominant portions of the code that can be executed in parallel, significant speedup can be achieved on a GPU. Predictive partitioned vector quantization (PPVQ) has been proven to be an effective lossless compression scheme for ultraspectral sounder data. It consists of linear prediction, bit-depth partitioning, vector quantization, and entropy coding. The two most time-consuming stages, linear prediction and vector quantization, are chosen for GPU-based implementation. By exploiting the data-parallel characteristics of these two stages, a spatial-division design achieves a speedup of 72x in our four-GPU implementation of the PPVQ compression scheme.
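
    The two stages singled out above are attractive for GPUs because they are data parallel. As an illustration only, the NumPy function below shows the independent per-vector, per-codeword structure of the nearest-codeword search at the heart of vector quantization; the paper's actual implementation is CUDA-based on four GPUs, and the names `vectors` and `codebook` are assumptions.

    ```python
    import numpy as np

    def nearest_codewords(vectors, codebook):
        """vectors: (N, d) data vectors; codebook: (K, d) codewords.
        Returns, for each vector, the index of its closest codeword."""
        # Every (vector, codeword) distance is independent, so the whole
        # N-by-K distance matrix can be evaluated in parallel on a GPU.
        d2 = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        return d2.argmin(axis=1)
    ```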

    Remote sensing big data computing: challenges and opportunities

    Get PDF
    As we have entered an era of high-resolution Earth observation, remote sensing (RS) data are undergoing explosive growth. The proliferation of data also gives rise to increasing complexity, such as the diversity and high dimensionality of the data, so RS data are now regarded as RS ‘‘Big Data’’. Fortunately, we are witnessing a technological leapfrogging. In this paper, we give a brief overview of Big Data and data-intensive problems, including the analysis of RS Big Data, the challenges it poses, and current techniques and work for processing RS Big Data.

    Lossless data compression studies for the geostationary imaging Fourier transform spectrometer (GIFTS) with the bias-adjusted reordering preprocessing

    No full text
    The NASA Geostationary Imaging Fourier Transform Spectrometer (GIFTS) represents a revolutionary step in remote sensing of Earth's atmosphere and will demonstrate the technology and measurement concepts for future NOAA Geostationary Operational Environmental Satellites (GOES). GIFTS consists of a 128 x 128 large focal plane array (LFPA) imaging FTS with spectral coverage from 685 to 1130 cm⁻¹ and 1650 to 2250 cm⁻¹. GIFTS was selected for flight demonstration on NASA's New Millennium Program (NMP) Earth Observing 3 (EO-3) satellite mission. GIFTS provides full-disk global coverage within an hour at moderate spectral resolution (e.g. 1.2 cm⁻¹) as well as regional sounding of atmospheric temperature and absorbing-gas profiles at high spectral resolution (e.g. 0.6 cm⁻¹). Given the unprecedented data volume produced by GIFTS, lossless data compression is critical to the overall success of the GIFTS experiment, where the data are to be disseminated to the user community in real time and archived for scientific studies and climate assessment. In this paper we study lossless compression of GIFTS data collected as part of the calibration, or Ground Based Tests, conducted in 2006. The standard compression methods JPEG-2000, JPEG-LS, and CCSDS IDC 9/7M and 5/3 are investigated as compression benchmarks. The bias-adjusted reordering (BAR) preprocessing scheme is also investigated to improve their performance on GIFTS data compression.

    Lossless compression of ultraspectral sounder data using an error-resilient arithmetic coder

    No full text

    Lossless compression of the geostationary imaging Fourier transform spectrometer (GIFTS) data via predictive partitioned vector quantization

    No full text

    Vector quantization with self-resynchronizing coding for lossless compression and rebroadcast of the NASA Geostationary Imaging Fourier Transform Spectrometer (GIFTS) data

    No full text
    As part of NASA's New Millennium Program, the Geostationary Imaging Fourier Transform Spectrometer (GIFTS) is an advanced ultraspectral sounder with a 128 x 128 array of interferograms for the retrieval of geophysical parameters such as atmospheric temperature, moisture, and wind. With the massive data volume that would be generated by future advanced satellite sensors such as GIFTS, even state-of-the-art channel coding (e.g. Turbo codes, LDPC) with a low bit error rate might not correct all errors. Due to the error-sensitive, ill-posed nature of the retrieval problem, lossless compression with error resilience is desired for ultraspectral sounder data downlink and rebroadcast. Previously, we proposed fast precomputed vector quantization (FPVQ) with arithmetic coding (AC), which produces high compression gain for ground operation. In this paper we adopt FPVQ with reversible variable-length coding (RVLC) to provide better resilience against the satellite transmission errors remaining after channel decoding. The FPVQ-RVLC method is compared with the previous FPVQ-AC method for lossless compression of the GIFTS data. The experiments show that the FPVQ-RVLC method is a significantly better tool for rebroadcast of massive ultraspectral sounder data.
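
    The error resilience of RVLC comes from the fact that its codewords can be parsed from either end of the bitstream, so data following a corrupted region can still be recovered by decoding backwards. The toy Python example below, with a small palindromic codebook chosen purely for illustration (it is not the paper's code table), demonstrates that property.

    ```python
    CODEBOOK = {"a": "0", "b": "11", "c": "101", "d": "1001"}   # symmetric (palindromic) codewords
    DECODE = {v: k for k, v in CODEBOOK.items()}

    def encode(symbols):
        return "".join(CODEBOOK[s] for s in symbols)

    def decode_forward(bits):
        out, buf = [], ""
        for b in bits:
            buf += b
            if buf in DECODE:
                out.append(DECODE[buf])
                buf = ""
        return out

    def decode_backward(bits):
        # Palindromic codewords read the same in reverse, so decode the reversed
        # stream with the same table and flip the recovered symbol order back.
        return decode_forward(bits[::-1])[::-1]

    bits = encode(list("badcab"))
    assert decode_forward(bits) == decode_backward(bits) == list("badcab")
    ```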

    Real-Time Anomaly Detection Based on a Fast Recursive Kernel RX Algorithm

    No full text
    Real-time anomaly detection has received wide attention in remote sensing image processing because many moving targets must be detected in a timely manner. A widely used anomaly detection algorithm is the Reed-Xiaoli (RX) algorithm proposed by Reed and Yu. The kernel RX algorithm proposed by Kwon and Nasrabadi is a nonlinear version of the RX algorithm and outperforms it in detection accuracy, but it is computationally more expensive. This paper presents a novel real-time anomaly detection framework based on the kernel RX algorithm. In the kernel RX detector, the inverse covariance matrix and the estimated mean of the background data in the kernel space are computed non-causally and inefficiently. In this work, a local causal sliding array window is used to ensure the causality of the detection system. Using the matrix inversion lemma and the Woodbury matrix identity, both the inverse covariance matrix and the estimated mean can be derived recursively without extensive repeated calculation, and therefore the real-time kernel RX detector can be implemented and processed pixel by pixel in real time. To substantiate its effectiveness and utility in real-time anomaly detection, real hyperspectral data sets are used in the experiments.
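
    The recursion described above rests on the rank-one (Sherman-Morrison) form of the Woodbury identity: when one new background sample enters the statistics, the inverse of the updated matrix follows from the previous inverse without a full re-inversion. The NumPy sketch below illustrates that principle for a plain covariance-like matrix; the paper's derivation operates in kernel space with a causal sliding window, so this is only an illustration of the update rule, not the authors' detector.

    ```python
    import numpy as np

    def rank_one_inverse_update(C_inv, u):
        """Return (C + u u^T)^{-1} given C^{-1}, in O(d^2) rather than O(d^3)."""
        Cu = C_inv @ u
        return C_inv - np.outer(Cu, Cu) / (1.0 + u @ Cu)

    # Quick numerical check against direct inversion.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 5))
    C = A @ A.T + 5 * np.eye(5)                 # symmetric positive definite
    u = rng.standard_normal(5)
    assert np.allclose(rank_one_inverse_update(np.linalg.inv(C), u),
                       np.linalg.inv(C + np.outer(u, u)))
    ```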